41 research outputs found

    First-Order Decomposition Trees

    Lifting attempts to speed up probabilistic inference by exploiting symmetries in the model. Exact lifted inference methods, like their propositional counterparts, work by recursively decomposing the model and the problem. In the propositional case, there exist formal structures, such as decomposition trees (dtrees), that represent such a decomposition and allow us to determine the complexity of inference a priori. However, there is currently no equivalent structure nor analogous complexity results for lifted inference. In this paper, we introduce FO-dtrees, which upgrade propositional dtrees to the first-order level. We show how these trees can characterize a lifted inference solution for a probabilistic logical model (in terms of a sequence of lifted operations), and provide a theoretical analysis of the complexity of lifted inference in terms of the novel notion of the lifted width of the tree.

    Highly Stable Multicrown Heterostructures of Type-II Nanoplatelets for Ultralow Threshold Optical Gain

    Solution-processed type-II quantum wells exhibit outstanding optical properties, which make them promising candidates for light-generating applications including lasers and LEDs. However, they may suffer from poor colloidal stability under ambient conditions and show a strong tendency to assemble into face-to-face stacks. In this work, to resolve the colloidal stability and uncontrolled stacking issues, we proposed and synthesized CdSe/CdSe1-xTex/CdS core/multicrown hetero-nanoplatelets (NPLs), controlling the amount of Te up to 50% in the crown without changing their thicknesses, which significantly increases their colloidal stability and photostability under ambient conditions while preserving their attractive optical properties. Confirming the final lateral growth of CdS sidewalls with X-ray photoelectron spectroscopy, energy-dispersive analysis, and photoelectron excitation spectroscopy, we found that the successful coating of this CdS crown around the periphery of conventional type-II NPLs prevents the unwanted formation of needle-like stacks, which reduces the undesired scattering losses in thin-film samples of these NPLs. Owing to highly efficient exciton funneling from the outermost CdS crown, accompanied by the reduced scattering and a very low waveguide loss coefficient (~18 cm⁻¹), ultralow optical gain thresholds as low as 4.15 μJ/cm² and 2.48 mJ/cm² were achieved for the multicrown type-II NPLs under one- and two-photon absorption pumping, respectively. These findings indicate that the strategy of using engineered advanced heterostructures of nanoplatelets provides solutions for improved colloidal stability and enables enhanced photonic performance.

    Assessment of AFT and Cox Models in the Analysis of Factors Influencing the Survival of Women with Breast Cancer in Yazd City

    BACKGROUND AND OBJECTIVE: Breast cancer is one of the most common cancers in women. Common statistical methods for the survival analysis of these patients are accelerated failure time (AFT) models and the Cox model. The purpose of this study is to compare these two models in determining the factors affecting the survival of breast cancer patients. METHODS: This was an analytical cohort study using survival analysis. The 538 patients with breast cancer referred to the Ramezanzade Radiotherapy Center in Yazd, whose survival status was recorded as a census from April 2005 until March 2012, were followed up by phone call. The Kaplan-Meier estimator was used to describe the survival of the patients. The research variables included clinical and demographic factors. The final variables in the model were chosen by dimension-reduction methods and by fitting all possible Cox regressions using the Akaike information criterion (AIC). The best AFT model, among candidates with different distributions, was likewise selected by the AIC. FINDINGS: The most effective Cox model among all Cox models included the Age, Her2, and Ki67 variables (AIC = 30270). The generalized gamma model was the optimal AFT model (AIC = 463.966). Her2 was significant in both the AFT and Cox models (p < 0.05). CONCLUSION: In both the AFT (generalized gamma) model and the Cox model, the Her2 variable was identified as a risk factor for breast cancer, with a positive impact on the risk of death and reduced survival.
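    The Akaike criterion used above for model selection is AIC = 2k − 2·ln(L), where k is the number of estimated parameters and L the maximized likelihood; a lower value indicates a better trade-off between fit and complexity. A minimal sketch (the parameter counts and log-likelihoods below are hypothetical, not the study's values):

    ```python
    # Akaike information criterion: AIC = 2k - 2*ln(L); lower is better.
    def aic(num_params: int, log_likelihood: float) -> float:
        return 2 * num_params - 2 * log_likelihood

    # Hypothetical maximized log-likelihoods for two candidate survival models.
    aic_cox = aic(3, -150.0)    # Cox model with 3 covariates
    aic_gamma = aic(4, -145.0)  # generalized gamma AFT model

    # Pick the model with the smaller AIC.
    best = min(("Cox", aic_cox), ("generalized gamma", aic_gamma), key=lambda t: t[1])
    ```

    With these illustrative numbers the extra parameter of the generalized gamma model is justified by its better fit, which mirrors how the study compared its candidate models.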

    Lifted Probabilistic Inference by Variable Elimination (Gelifte probabilistische inferentie door variabele eliminatie)

    Representing, learning, and reasoning about knowledge are central to artificial intelligence (AI). A long-standing goal of AI is unifying logic and probability, to benefit from the strengths of both formalisms. Probability theory allows us to represent and reason in uncertain domains, while first-order logic allows us to represent and reason about structured, relational domains. Many real-world problems exhibit both uncertainty and structure, and thus can be more naturally represented with a combination of probabilistic and logical knowledge. This observation has led to the development of probabilistic logical models (PLMs), which combine probabilistic models with elements of first-order logic to succinctly capture uncertainty in structured, relational domains, such as social networks and citation graphs. While PLMs provide expressive representation formalisms, efficient inference is still a major challenge in these models, as they typically involve a large number of objects and interactions among them. Among the various efforts to address this problem, a promising line of work is lifted probabilistic inference. Lifting attempts to improve the efficiency of inference by exploiting the symmetries in the model. The basic principle of lifting is to perform an inference operation once for a whole group of interchangeable objects, instead of once per individual in the group. Researchers have proposed lifted versions of various (propositional) probabilistic inference algorithms, and shown large speedups achieved by the lifted algorithms over their propositional counterparts. In this dissertation, we make a number of novel contributions to lifted inference, mainly focusing on lifted variable elimination (LVE). First, we focus on constraint processing, which is an integral part of lifted inference. Lifted inference algorithms are commonly tightly coupled to a specific constraint language.
We bring more insight into LVE by decoupling the operators from the used constraint language. We define lifted inference operations so that they operate on the semantic level rather than on the syntactic level, making them language independent. Further, we show how this flexibility allows us to improve the efficiency of inference, by enhancing LVE with a more powerful constraint representation. Second, we generalize the `lifting' tools used by LVE, by introducing a number of novel lifted operators in this algorithm. We show how these operations allow LVE to exploit a broader range of symmetries, and thus expand the range of problems it can solve in a lifted way. Third, we advance our theoretical understanding of lifted inference by providing the first completeness result for LVE. We prove that LVE is complete---always has a lifted solution---for the fragment of 2-logvar models, a model class that can represent many useful relations in PLMs, such as (anti-)symmetry and homophily. This result also shows the importance of our contributions to LVE, as we prove they are sufficient and necessary for LVE to achieve completeness. Fourth, we propose the structure of first-order decomposition trees (FO-dtrees), as a tool for symbolically analyzing lifted inference solutions. We show how FO-dtrees can be used to characterize an LVE solution, in terms of a sequence of lifted operations. We further provide a theoretical analysis of the complexity of lifted inference based on a corresponding FO-dtree, which is valuable for finding and selecting among different lifted solutions. Finally, we present a pre-processing method for speeding up (lifted) inference. Our goal with this method is to speed up inference in PLMs by restricting the computations to the requisite part of the model. For this, we build on the Bayes-ball algorithm that identifies the requisite variables in a ground Bayesian network.
We present a lifted version of Bayes-ball, which works with first-order Bayesian networks, and show how it applies to lifted inference.
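    The basic lifting principle described above (one operation per interchangeable group instead of one per individual) can be sketched in a few lines; the factor value and group size here are hypothetical, purely for illustration:

    ```python
    from functools import reduce

    # A factor value shared by a group of n interchangeable objects
    # (e.g. n individuals with identical potentials). Values are illustrative.
    phi, n = 0.9, 1000

    # Propositional: multiply in the same factor once per individual object.
    propositional = reduce(lambda acc, _: acc * phi, range(n), 1.0)

    # Lifted: compute the contribution once for the whole interchangeable group.
    lifted = phi ** n
    ```

    Both computations agree (up to floating-point rounding), but the lifted one does constant work for the group, which is the source of the speedups reported for lifted algorithms.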

    Generalized counting for lifted variable elimination

    Lifted probabilistic inference methods exploit symmetries in the structure of probabilistic models to perform inference more efficiently. In lifted variable elimination the symmetry among a group of interchangeable random variables is captured by counting formulas, and exploited by operations that handle such formulas. In this paper we generalize the structure of counting formulas and present a set of inference operators that introduce and eliminate these formulas from the model. This generalization expands the range of problems that can be solved in a lifted way. Our work is closely related to the recently introduced method of joint conversion. Due to its more fine-grained formulation, however, our approach can provide more efficient solutions than joint conversion.
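    To illustrate the idea behind counting formulas (as a general concept, not the paper's specific operators): for n interchangeable binary random variables, one can reason over the n + 1 possible histograms of truth values instead of all 2^n ground assignments. A hedged sketch, with an illustrative n and success probability:

    ```python
    import math
    from itertools import product

    n, p = 6, 0.3  # six interchangeable Bernoulli(p) random variables

    # Ground level: enumerate all 2^n joint assignments explicitly.
    ground = 0.0
    for assignment in product([0, 1], repeat=n):
        prob = 1.0
        for x in assignment:
            prob *= p if x else 1 - p
        ground += prob * sum(assignment)  # expected count of "true" variables

    # Lifted level: a counting formula groups assignments with the same
    # histogram, so only n + 1 cases (weighted by binomial coefficients) remain.
    lifted = sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) * k for k in range(n + 1))
    ```

    Both computations yield the same expectation (n·p), but the lifted sum has n + 1 terms rather than 2^n, which is exactly the kind of saving counting formulas provide.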

    Biclustering of gene expression data using probabilistic logic learning

    We consider the problem of discovering biclusters in gene expression data by means of machine learning. The data contains the measured expression levels of the genes of a particular organism under a number of varying conditions. The learning task given such a dataset is to find subsets of genes that are co-expressed under subsets of conditions (such a subset of genes together with the corresponding subset of conditions is called a bicluster). The problem of biclustering gene expression data has already been tackled using probabilistic model-based biclustering. So far, this approach was implemented in a special-purpose system [1], although there are a number of general-purpose probabilistic modelling systems that also appear suitable for solving this problem. A solution in a general-purpose system would have the advantage of being easily adaptable and extensible, for instance with respect to additional data sources about the considered genes [1]. The goal of this work is to investigate how well the problem of biclustering gene expression data can be solved with a number of general-purpose probabilistic modelling systems. Concretely, we consider so-called probabilistic logic learning (PLL) systems, which use elements of first-order logic for the sake of expressivity. PLL is currently a very popular approach in the artificial intelligence and machine learning community. In our work, we first made an analysis of the modelling and learning features required to solve the biclustering problem (such as the ability to deal with numerical data, with overlapping clusters, etc.). Next, we made an overview of which of these features are supported by which PLL systems. In this analysis, the PLL system Alchemy (which deals with so-called Markov Logic) appeared to be the most promising. Hence, we continued by implementing probabilistic model-based biclustering in this system.
This work showed that there are several practical problems that make it impossible to represent the desired model in the Alchemy system. We report the problems encountered (limitations of Alchemy) as well as the aspects of the biclustering task that can easily be modelled in Alchemy (strong points of Alchemy). In the light of these limitations and strong points, we also compare Alchemy to the other PLL systems considered in our initial analysis. From the perspective of biological applications, our discussion is relevant in the sense that we give some insight into what kind of problems can and cannot easily be tackled using popular general-purpose systems. From the perspective of informatics, in particular machine learning, our discussion is relevant in the sense that we identify a number of shortcomings of existing systems and corresponding directions for future work. [1] Tim Van den Bulcke et al., "Efficient Query-Driven Biclustering of Gene Expression Data Using Probabilistic Relational Models", ESAT-SISTA Internal Report 08-134, K.U.Leuven, 2008.

    Probabilistic logical learning for biclustering: A case study with surprising results

    Many approaches to probabilistic logical learning have been proposed by now, and several of these have been implemented into powerful learning and inference systems. Given this state of the art, it appears natural to start using these systems for solving concrete problems. This paper presents some results of a case study where several probabilistic logical learning systems have been applied to a seemingly simple problem that exhibits both probabilistic and relational aspects. The results are surprisingly negative: none of the systems we have tried could adequately handle the problem at hand. We discuss the reasons for this. This leads to several conclusions. First, still more effort must be invested in developing full-fledged implementations that can handle a wide range of realistic problems. Second, the intrinsic limitations of certain approaches may not yet be fully understood. Third, the problem we discuss here may be an interesting application for probabilistic logical learning systems, and we invite other researchers to use it as a benchmark for evaluating the applicability of their favorite systems.